    Partial convolution based multimodal autoencoder for art investigation

    Autoencoders have been widely used in applications with limited annotations to extract features in an unsupervised manner, pre-processing the data to be used in machine learning models. This is especially helpful in image processing for art investigation, where annotated data are scarce and difficult to collect. We introduce a structural similarity index based loss function to train the autoencoder for image data. By extending the recently developed partial convolution to partial deconvolution, we construct a fully partial convolutional autoencoder (FP-CAE) and adapt it to the multimodal data typically utilized in art investigation. Experimental results on images of the Ghent Altarpiece show that our method significantly suppresses edge artifacts and improves the overall reconstruction performance. The proposed FP-CAE can be used for data preprocessing in craquelure detection and other art investigation tasks in future studies.
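
    The key building block extended in this work is the partial convolution, which convolves only over valid (unmasked) pixels and renormalizes the response by the local mask coverage before passing an updated mask to the next layer. The PyTorch sketch below is a minimal illustration of that mechanism, with layer and parameter names of our own choosing; it is not the authors' FP-CAE implementation and omits the partial deconvolution and the SSIM-based loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Module):
    """Minimal partial convolution: convolve only valid (mask == 1) pixels and
    renormalize by the number of valid pixels under each kernel window."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding, bias=False)
        # fixed all-ones kernel used only to count valid pixels per window
        self.register_buffer("ones", torch.ones(1, 1, kernel_size, kernel_size))
        self.stride, self.padding = stride, padding

    def forward(self, x, mask):
        # x: (B, C, H, W) features; mask: (B, 1, H, W) with 1 = valid, 0 = hole
        with torch.no_grad():
            valid = F.conv2d(mask, self.ones, stride=self.stride, padding=self.padding)
        out = self.conv(x * mask)
        scale = self.ones.numel() / valid.clamp(min=1.0)   # renormalize by coverage
        out = out * scale * (valid > 0)                    # zero out all-hole windows
        return out, (valid > 0).float()                    # updated mask for next layer

# one encoding step on a toy image whose mask has a square hole
x, mask = torch.randn(1, 3, 64, 64), torch.ones(1, 1, 64, 64)
mask[..., 20:40, 20:40] = 0
features, new_mask = PartialConv2d(3, 16)(x, mask)
```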

    Fast switching cholesteric liquid crystal optical beam deflector with polarization independence

    Optical beam deflectors based on the combination of cholesteric liquid crystals and polymer microgratings are reported. A dual-frequency cholesteric liquid crystal (DFCh-LC) is adopted to accelerate the switching from the homeotropic state back to the planar state. Polarization-independent beam steering components are realized, whose transmission varies by only 4.4% and 2.6% with the polarization angle for the planar state and the homeotropic state, respectively. A response time of 451 ms is achieved for the DFCh-LC-grating beam deflectors, which is fast compared to other nematic LC beam steerers with similar LC thickness.

    Assisting classical paintings restoration: efficient paint loss detection and descriptor-based inpainting using shared pretraining

    In the restoration process of classical paintings, one of the tasks is to map paint loss for documentation and analysis purposes. Because this is such a sizable and tedious job, automatic techniques are in high demand. The currently available tools allow only rough mapping of the paint loss areas while still requiring considerable manual work. We develop here a learning method for paint loss detection that makes use of multimodal image acquisitions, and we apply it within the current restoration of the Ghent Altarpiece. Our neural network architecture is inspired by a multiscale convolutional neural network known as U-Net. In our proposed model, the downsampling of the pooling layers is omitted to enforce translation invariance, and the convolutional layers are replaced with dilated convolutions. The dilated convolutions lead to denser computations and improved classification accuracy. Moreover, the proposed method is designed to make use of multimodal data, which are nowadays routinely acquired during the restoration of master paintings and which allow more accurate detection of features of interest, including paint losses. Our focus is on developing a robust approach with minimal user intervention. Adequate transfer learning is crucial here in order to extend the applicability of pre-trained models to paintings that were not included in the training set, with only modest additional re-training. We introduce a pre-training strategy based on a multimodal convolutional autoencoder and fine-tune the model when applying it to other paintings. We evaluate the results by comparing the detected paint loss maps to manual expert annotations, and also by running virtual inpainting based on the detected paint losses and comparing the virtually inpainted results with the actual physical restorations. The results clearly indicate the efficacy of the proposed method and its potential to assist in art conservation and restoration processes.
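
    As a rough illustration of the architectural choice described above (pooling omitted, dilated convolutions in its place, multimodal input stacked as channels), the following PyTorch sketch keeps full resolution while growing the receptive field. Channel counts, dilation rates and the number of modalities are assumptions for the example, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class DilatedPaintLossNet(nn.Module):
    """Toy per-pixel classifier: stacked dilated convolutions keep the full
    image resolution (no pooling) while rapidly growing the receptive field."""
    def __init__(self, in_channels=4, base=32, num_classes=2):
        super().__init__()
        layers, ch = [], in_channels
        for d in (1, 2, 4, 8):                    # exponentially growing dilation
            layers += [nn.Conv2d(ch, base, 3, padding=d, dilation=d),
                       nn.BatchNorm2d(base),
                       nn.ReLU(inplace=True)]
            ch = base
        self.features = nn.Sequential(*layers)
        self.classify = nn.Conv2d(base, num_classes, 1)   # per-pixel paint-loss logits

    def forward(self, x):                         # x: stacked modalities, (B, C, H, W)
        return self.classify(self.features(x))

# e.g. four registered modalities stacked along the channel axis (illustrative)
logits = DilatedPaintLossNet(in_channels=4)(torch.randn(1, 4, 128, 128))
```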

    Deep learning for paint loss detection with a multiscale, translation invariant network

    We explore the potential of deep learning in digital painting analysis to facilitate condition reporting and to support restoration treatments. We address the problem of paint loss detection and develop a multiscale deep learning system with dilated convolutions that enables a large receptive field with limited training parameters, so as to avoid overtraining. Our model efficiently handles the multimodal data that are typically acquired in art investigation. As a case study we use multimodal data of the Ghent Altarpiece. Our results indicate the great potential of the proposed approach in terms of accuracy, as well as its fast execution, which allows interactivity and continuous learning.
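
    The receptive-field benefit mentioned above is easy to quantify: with stride-1 convolutions, each layer enlarges the receptive field by dilation * (kernel_size - 1) pixels while its parameter count stays that of an ordinary kernel. The small check below is our own back-of-the-envelope arithmetic, not a figure from the paper.

```python
def receptive_field(kernel_size, dilations):
    """Receptive field of a stack of stride-1 dilated convolutions:
    each layer adds dilation * (kernel_size - 1) pixels."""
    rf = 1
    for d in dilations:
        rf += d * (kernel_size - 1)
    return rf

# four 3x3 layers with dilations 1, 2, 4, 8 see a 31x31 window per pixel,
# using exactly the same number of weights as four ordinary 3x3 layers
print(receptive_field(3, [1, 2, 4, 8]))   # -> 31
print(receptive_field(3, [1, 1, 1, 1]))   # -> 9 for the undilated stack
```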

    Autoencoder-learned local image descriptor for image inpainting

    In this paper, we propose an efficient method for learning local image descriptors suitable for use in image inpainting algorithms. We learn the descriptors using a convolutional autoencoder network designed so that it produces a computationally efficient extraction of patch descriptors through an intermediate image representation. This approach saves computational memory and time in comparison to existing methods when used with algorithms that require patch search and matching within a single image. We demonstrate these benefits by integrating our descriptor into an inpainting algorithm and comparing it to an existing autoencoder-based descriptor. We also show results indicating that our descriptor improves robustness to missing areas within the patches.
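
    The efficiency claim rests on running a fully convolutional encoder once over the whole image and then reading the descriptor of any patch out of the shared intermediate feature map, instead of encoding every patch separately. The sketch below illustrates that idea with a toy encoder and an invented read-out helper; it is not the descriptor network proposed in the paper.

```python
import torch
import torch.nn as nn

class ToyDescriptorEncoder(nn.Module):
    """Toy fully convolutional encoder: one pass over the image yields a dense
    feature map that serves as the shared intermediate representation."""
    def __init__(self, in_ch=3, feat_ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, image):                      # (B, C, H, W) -> (B, feat_ch, H, W)
        return self.net(image)

def patch_descriptor(feature_map, y, x, size=7):
    """Descriptor of the patch centred at (y, x): crop the matching window of
    the shared feature map and flatten it, rather than re-encoding the patch."""
    half = size // 2
    window = feature_map[..., y - half:y + half + 1, x - half:x + half + 1]
    return window.flatten(start_dim=1)             # (B, feat_ch * size * size)

fmap = ToyDescriptorEncoder()(torch.randn(1, 3, 64, 64))
d1, d2 = patch_descriptor(fmap, 20, 30), patch_descriptor(fmap, 40, 40)
similarity = torch.cosine_similarity(d1, d2)       # e.g. for patch matching
```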